

Search for: All records

Creators/Authors contains: "Sonboli, Nasim"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Recommender systems are poised at the interface between stakeholders: for example, job applicants and employers in the case of recommendations of employment listings, or artists and listeners in the case of music recommendation. In such multisided platforms, recommender systems play a key role in enabling discovery of products and information at large scales. However, as they have become more and more pervasive in society, the equitable distribution of their benefits and harms has come under increasing scrutiny, as is the case with machine learning generally. While recommender systems can exhibit many of the biases encountered in other machine learning settings, the intersection of personalization and multisidedness makes the question of fairness manifest itself quite differently in recommender systems. In this article, we discuss recent work in the area of multisided fairness in recommendation, starting with a brief introduction to core ideas in algorithmic fairness and multistakeholder recommendation. We describe techniques for measuring fairness and algorithmic approaches for enhancing fairness in recommendation outputs. We also discuss feedback and popularity effects that can lead to unfair recommendation outcomes. Finally, we introduce several promising directions for future research in this area.
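As a concrete illustration of the kind of fairness measurement the survey discusses, the sketch below computes the share of recommendation slots allocated to a protected group of providers, which can then be compared against that group's share of the catalog. The function name and data layout are illustrative assumptions, not taken from the article.

```python
def provider_exposure(rec_lists, item_provider, protected):
    """Fraction of all recommendation slots (across users) that go to
    items from protected-group providers. Values near the protected
    group's catalog share suggest proportional exposure."""
    slots = 0
    protected_slots = 0
    for recs in rec_lists:          # one recommendation list per user
        for item in recs:
            slots += 1
            if item_provider[item] in protected:
                protected_slots += 1
    return protected_slots / slots if slots else 0.0
```

For example, with two users' top-2 lists `[["a", "b"], ["a", "c"]]`, where items `a` and `c` come from a protected provider, three of the four slots go to the protected group.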

  2. Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although prior work in other branches of AI has explored explanations as a tool for increasing fairness, that work has not focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features, informed by the needs of our participants, that could improve user understanding of and trust in fairness-aware recommender systems.
  3. Comparative experimentation is important for studying reproducibility in recommender systems. This is particularly true in areas without well-established methodologies, such as fairness-aware recommendation. In this paper, we describe fairness-aware enhancements to our recommender systems experimentation tool librec-auto. These enhancements include metrics for various classes of fairness definitions, an extension of the experimental model to support result re-ranking together with a library of associated re-ranking algorithms, and additional support for experiment automation and reporting. The associated demo will help attendees move quickly to configuring and running their own experiments with librec-auto.
  4. Recommender systems learn from past user preferences in order to predict future user interests and provide users with personalized suggestions. Previous research has demonstrated that biases in user profiles in the aggregate can influence the recommendations to users who do not share the majority preference. One consequence of this bias propagation effect is miscalibration: a mismatch between the types or categories of items that a user prefers and the items provided in recommendations. In this paper, we conduct a systematic analysis aimed at identifying key characteristics in user profiles that might lead to miscalibrated recommendations. We consider several categories of profile characteristics, including similarity to the average user, propensity towards popularity, profile diversity, and preference intensity. We develop predictive models of miscalibration and use these models to identify the most important features correlated with miscalibration, given different algorithms and dataset characteristics. Our analysis is intended to help system designers predict miscalibration effects and to develop recommendation algorithms with improved calibration properties.
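To make the notion of miscalibration concrete, a common formulation measures it as the KL divergence between the category distribution of a user's profile and that of their recommendation list: zero means the recommendations mirror the user's category preferences exactly. The sketch below follows that general formulation; the smoothing constant and data layout are assumptions for illustration, not details from the paper.

```python
import math
from collections import Counter

def category_dist(items, item_categories):
    """Normalized distribution over categories for a list of items.
    Items with multiple categories contribute fractionally to each."""
    counts = Counter()
    for item in items:
        cats = item_categories[item]
        for cat in cats:
            counts[cat] += 1.0 / len(cats)
    total = sum(counts.values())
    return {c: v / total for c, v in counts.items()}

def miscalibration(profile, recs, item_categories, alpha=0.01):
    """KL divergence of the recommendation category distribution from
    the profile category distribution (0 = perfectly calibrated)."""
    p = category_dist(profile, item_categories)
    q = category_dist(recs, item_categories)
    kl = 0.0
    for cat, p_c in p.items():
        # smooth q towards p so the divergence stays finite when a
        # preferred category is missing from the recommendations
        q_c = (1 - alpha) * q.get(cat, 0.0) + alpha * p_c
        kl += p_c * math.log(p_c / q_c)
    return kl
```

A recommendation list drawn from the same categories as the profile yields a divergence near zero, while a list concentrated on categories the user never interacts with yields a large positive value.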
  5. As recommender systems have become more widespread and moved into areas with greater social impact, such as employment and housing, researchers have begun to seek ways to ensure fairness in the results that such systems produce. This work has primarily focused on developing recommendation approaches in which fairness metrics are jointly optimized along with recommendation accuracy. However, previous work has largely ignored how individual preferences may limit the ability of an algorithm to produce fair recommendations. Furthermore, with few exceptions, researchers have only considered scenarios in which fairness is measured relative to a single sensitive feature or attribute (such as race or gender). In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results. Specifically, we show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches and does so across multiple fairness dimensions.
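The paper's specific algorithm is not reproduced here, but the greedy score-plus-fairness-bonus pattern that fairness-aware re-rankers of this kind build on can be sketched as follows. The 50% exposure target, the `lam` trade-off weight, and the single protected-provider group are illustrative assumptions; the paper itself handles multiple fairness dimensions.

```python
def rerank(candidates, scores, item_provider, protected, k, lam=0.5):
    """Greedy re-ranking: fill each of k slots with the candidate that
    best trades off predicted score against under-exposure of
    protected-group providers in the list built so far."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        n_prot = sum(1 for i in selected if item_provider[i] in protected)
        # grant a bonus while protected providers hold fewer than half
        # of the slots filled so far (or the list is still empty)
        need = 1.0 if not selected or n_prot / len(selected) < 0.5 else 0.0

        def value(item):
            bonus = need if item_provider[item] in protected else 0.0
            return (1 - lam) * scores[item] + lam * bonus

        best = max(remaining, key=value)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam=0` this reduces to ranking purely by predicted score; raising `lam` promotes protected-provider items into earlier slots whenever their exposure lags.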
  6. The field of machine learning fairness has developed metrics, methodologies, and data sets for experimenting with classification algorithms. However, equivalent research is lacking in the area of personalized recommender systems. This 180-minute hands-on tutorial will introduce participants to concepts in fairness-aware recommendation, as well as metrics and methodologies for evaluating recommendation fairness. Participants will also gain hands-on experience with conducting fairness-aware recommendation experiments with the LibRec recommendation system using the librec-auto scripting platform, and learn the steps required to configure their own experiments, incorporate their own data sets, and design their own algorithms and metrics.